
    On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes

    In this paper, we present an improved union bound on the Linear Programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality from probability theory, and the bound is minimized using Prim's minimum spanning tree algorithm. The bound calculation requires the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, yet involves relatively low computational complexity. It is targeted at high-density parity-check codes, where the number of generators is extremely large and the generators are spread densely in the Euclidean space. We explore the generator density and compare different parity-check matrix representations; this density affects the improvement of the proposed bound over the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for the BCH[31,21,5] code.
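    The central numerical step the abstract describes, a second-order Bonferroni-type (Hunter-style) bound tightened over spanning trees via Prim's algorithm, can be sketched as follows. This is a minimal illustration under stated assumptions: the single-event probabilities P(A_i) and pairwise joint probabilities P(A_i ∩ A_j) of the pseudocodeword error events are taken as already computed from the fundamental cone generators, and the function name `hunter_bound` and its array interface are placeholders, not the paper's code.

```python
import numpy as np

def hunter_bound(p_single, p_pair):
    """Second-order Bonferroni (Hunter-type) upper bound on P(union of events).

    p_single : 1-D array, p_single[i] = P(A_i)
    p_pair   : 2-D symmetric array, p_pair[i, j] = P(A_i and A_j)

    The bound  sum_i P(A_i) - sum_{(i,j) in T} P(A_i and A_j)  holds for any
    spanning tree T of the complete graph on the events; it is tightest when T
    is a maximum-weight spanning tree, found here with a Prim-style greedy scan
    (equivalently, Prim's minimum spanning tree on the negated pair weights).
    """
    p_single = np.asarray(p_single, dtype=float)
    p_pair = np.asarray(p_pair, dtype=float)
    n = len(p_single)
    if n == 1:
        return p_single[0]

    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = p_pair[0].copy()          # heaviest known edge linking each node to the tree
    best[0] = -np.inf
    tree_weight = 0.0
    for _ in range(n - 1):
        j = int(np.argmax(best))     # add the heaviest edge into the current tree
        tree_weight += best[j]
        in_tree[j] = True
        best = np.where(in_tree, -np.inf, np.maximum(best, p_pair[j]))

    return p_single.sum() - tree_weight
```

    Minimizing the bound amounts to maximizing the subtracted pairwise term, i.e., finding a maximum-weight spanning tree over the event graph, which a Prim-style scan delivers in O(n²) for a dense graph of events.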

    Efficient Linear Programming Decoding of HDPC Codes

    We propose several improvements to Linear Programming (LP) decoding algorithms for High Density Parity Check (HDPC) codes. First, we use the automorphism group of a code to create parity-check matrix diversity and to generate valid cuts from redundant parity checks. Second, we propose an efficient mixed-integer decoder utilizing the branch-and-bound method. We further enhance the proposed decoders by removing inactive constraints and by adapting the parity-check matrix prior to decoding according to the channel observations. Based on simulation results, the proposed decoders achieve near-ML performance with reasonable complexity.
    Comment: Submitted to the IEEE Transactions on Communications, November 200
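    As a rough illustration of the LP-plus-branch-and-bound ingredient, the sketch below combines the standard Feldman LP relaxation of the parity-check constraints with branching on the most fractional bit. It is a simplification under assumptions: it enumerates all odd-sized "forbidden set" inequalities per check, which is only practical for low-degree checks (the paper instead generates cuts adaptively for HDPC codes), and the names `H`, `llr`, and `lp_decode_bnb` are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode_bnb(H, llr, fixed=None, incumbent=np.inf):
    """Branch-and-bound LP decoding sketch over a Feldman-style relaxation.

    H     : binary parity-check matrix (m x n)
    llr   : channel log-likelihood ratios; decoding minimizes sum_i llr[i] * x[i]
    fixed : dict {bit index: 0 or 1} of bits fixed by branching
    Returns (decoded bits or None, cost).
    """
    m, n = H.shape
    fixed = fixed or {}

    # For each check j and every odd-sized subset S of its neighborhood N(j),
    # add the inequality  sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
    A_ub, b_ub = [], []
    for j in range(m):
        nbrs = np.flatnonzero(H[j])
        for size in range(1, len(nbrs) + 1, 2):
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[nbrs] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)

    bounds = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(n)]
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    if not res.success or res.fun >= incumbent:   # infeasible, or pruned by the LP lower bound
        return None, np.inf

    x = res.x
    if np.all((x < 1e-6) | (x > 1 - 1e-6)):       # integral optimum: a codeword
        return np.round(x).astype(int), float(res.fun)

    i = int(np.argmin(np.abs(x - 0.5)))           # branch on the most fractional bit
    best_x, best_cost = None, incumbent
    for b in (0, 1):
        cand, cost = lp_decode_bnb(H, llr, {**fixed, i: b}, best_cost)
        if cost < best_cost:
            best_x, best_cost = cand, cost
    return best_x, best_cost
```

    The LP relaxation value at each node is a lower bound on the cost of any codeword in that subtree, so subtrees whose LP optimum is no better than the incumbent are pruned, which is what turns plain branching into branch and bound.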

    Deep Ensemble of Weighted Viterbi Decoders for Tail-Biting Convolutional Codes

    Tail-biting convolutional codes extend the classical zero-termination convolutional codes: both encoding schemes force the start and end states to be equal, but under tail-biting every state is a valid termination state. This paper proposes a machine-learning approach to improve the state-of-the-art decoding of tail-biting codes, focusing on the widely employed short-length regime, as in the LTE standard, which also includes a CRC code. First, we parameterize the circular Viterbi algorithm (CVA), a baseline decoder that exploits the circular nature of the underlying trellis. An ensemble then combines multiple such weighted decoders, each specializing in decoding words from a specific region of the channel words' distribution; a region corresponds to a subset of termination states, and the ensemble covers the entire state space. A non-learnable gating mechanism satisfies two goals: it filters easily decoded words and mitigates the overhead of executing multiple weighted decoders. The CRC criterion is employed to choose only a subset of experts for decoding. Our method achieves an FER improvement of up to 0.75 dB over the CVA in the waterfall region for multiple code lengths, while adding negligible computational complexity compared to the CVA at high SNRs.
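    A minimal sketch of the gating-and-ensemble flow described above, assuming callable decoders: `cva_decode` for the baseline circular Viterbi algorithm, `expert_decoders` for the weighted variants, and a `crc_check` predicate for the outer CRC. These interfaces and the exact expert-selection rule are placeholders inferred from the abstract, not the paper's implementation.

```python
def ensemble_decode(llr, cva_decode, expert_decoders, expert_states, crc_check):
    """Gated ensemble of weighted circular-Viterbi decoders (sketch).

    llr             : channel LLRs of one received word
    cva_decode      : baseline CVA, returns (decoded bits, estimated termination state)
    expert_decoders : list of weighted CVA decoders, one per region of termination states
    expert_states   : list of state subsets; expert k specializes in expert_states[k]
    crc_check       : predicate on decoded bits implementing the outer CRC
    """
    # Gate: run the cheap baseline first; if its output already passes the CRC,
    # stop here. Most words at moderate-to-high SNR take this path, which keeps
    # the added complexity of the ensemble negligible.
    bits, state = cva_decode(llr)
    if crc_check(bits):
        return bits

    # Otherwise, select only the experts whose state region is consistent with
    # the baseline decision, and return the first expert output passing the CRC.
    selected = [k for k, states in enumerate(expert_states) if state in states]
    for k in selected or range(len(expert_decoders)):   # fall back to all experts if none match
        cand, _ = expert_decoders[k](llr)
        if crc_check(cand):
            return cand
    return bits                                          # no expert satisfied the CRC
```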